New technologies often enable lawyers to deliver services more efficiently.
However, adopting new tech in a law firm requires a careful and considered approach, balancing the benefits with our professional responsibilities. The complexity of artificial intelligence (AI) technologies makes this balancing act particularly challenging for AI-enabled products.
The need for a clear AI policy
It is likely that AI tools built specifically for lawyers will become commonplace from late 2023. No matter how useful a tool is, whether and how to use it should be a considered decision taken at practice level, not left to the discretion of individual staff.
This is not a suggestion that AI should not be used. The ability to use AI well is likely to be an important professional and business skill in the near future.1 The reality is also that, as more software providers include AI functionality in their products, avoiding AI will become extremely difficult.
The objective of an AI policy is to:
- ensure only approved products are used
- ensure that products are used appropriately and ethically, and
- focus attention on underlying risks and how to avoid these.
Ethical compliance
Law is not just any other business. We stand in a special relationship to our clients and to the courts. While the ethics of AI use (both in the general sense and in law firms specifically) is likely to develop fairly rapidly along with the technology, several longstanding principles already apply.
1. Competence and diligence (ASCR Rule 4)
Solicitors must act with competence and diligence in the provision of legal services.2 In this context there are two facets to the duty of competence:
- legal competence: that the supplied work is of an acceptable standard, and
- technological competence: that any technology utilised in service delivery is used appropriately.3
AI tools in their initial development phase may produce inconsistent results, offering both accurate and erroneous information. Solicitors using AI tools cannot delegate responsibility for the quality of the work and must have enough knowledge of the pertinent law and the specific facts to identify errors.
While legal professionals are not required to be IT specialists, they should possess a foundational understanding of the tools they employ, including their constraints and basic operations. This applies whether the tool is a word processor or an AI-driven document assembly system.
2. Confidentiality (ASCR Rule 9)
A solicitor’s duty of confidentiality4 extends beyond not disclosing information; it also requires appropriate precautions to safeguard against unintentional data loss and against collateral use of confidential data for a purpose other than that for which it was entrusted.
Reading and understanding the provider’s privacy and data use policy is essential. Even an excellent tool might be inappropriate for a law practice if it allows too much access to client information.
AI software must be trained to be effective. To perform legal functions, it will require access to a great deal of structured legal data, such as the contents of a law firm’s practice management system.
Providers will have a strong commercial imperative to acquire as much access to such data as they can, and they may not be overly scrupulous about the methods they employ to achieve their ultimate commercial goal: a tool that can replicate a human’s work with as little human input as possible.
Consequently, law firms need to be cautious when selecting AI providers. We must ensure they adhere to strict data privacy and security standards, and thoroughly consider what the vendor will do with the information it accesses.
While the AI product is in use, it can adapt based on the data it receives and any corrections or reactions to the draft output. The AI may analyse patterns within the structured data with the goal of refining its system for subsequent use. Even if the client’s data is not permanently saved in the provider’s system, the collection and retention of these patterns might negatively affect the client’s interests. This is especially likely if the material is unusual or includes a lot of technical knowledge.
For example, Thomson Reuters was very concerned about the ROSS AI platform, which was given access to Westlaw’s precedent and technical library. Although the AI did not directly copy Westlaw’s product, Thomson Reuters was concerned that the essence of that material – the structured patterns within the data – would be replicated.
The matter was never adjudicated as ROSS chose to discontinue the product instead, but there will no doubt be a lot of litigation on this topic before the issue is settled.5 If your client is entrusting your firm with a lot of proprietary information, you should work through the issues with them carefully prior to using an AI product.
If you need an analogy, the AI provider does not really need the oranges (the data) if they can just drink the juice (the patterns within that data).
This could also impact the firm’s interests. If you specialise in a niche area, the materials you provide access to might be quite distinctive. When an AI system has a relatively limited dataset to learn from, it may unavoidably reproduce the few patterns it is familiar with. Consequently, your firm’s data may permit the AI to supply very similar documents to a competitor.
What does all this mean practically?
The AI provider should offer a clear guarantee that:
- data, even when anonymised, will not be copied or stored beyond the need for immediate processing, and
- the provided data will not be utilised for AI training purposes.
These aspects may differ between various systems. For instance, ChatGPT (GPT-4) did not use data submitted via the chat text box for AI training; however, until 1 March 2023, it did use data submitted through third-party tools built on the GPT architecture. A tool that is ‘powered by’ a familiar AI system may nonetheless operate under an entirely different access and data use agreement.
3. Conflicts of interest (ASCR Rules 10 and 11), misuse of proprietary data
Even if you consider that the use of client data to train a system does not ‘disclose’ the data (or does so for the purposes of providing the client’s services, which is a permissible exception to the rule 9 duty of confidentiality),6 the issue is not necessarily resolved.
The possibility that a firm might use the confidential information of one client to benefit another is the underlying basis of the rule prohibiting acting for opposing clients, either concurrently7 or successively.8 A conflict usually requires information that is sufficiently relevant to a dispute or commercial competition between the two clients.
However, a system which takes client data and uses it to improve a firm tool that will be used to service other clients, including competitors, may fall within that concern. It is also a potential misuse of the data entrusted to you by the client (held as a fiduciary) or by an opposing party (a potential breach of your ‘Harman Undertaking’).
This is an interesting area for development. It has never been suggested that a firm is precluded from taking work developed for one client and putting it into their precedent bank, provided that it is appropriately anonymised. This may be analogous.
However, law firms should consider appropriately worded consents in their client agreements that will clear the way for future use of client data to train AI tools. In any event, data so used should be anonymised as a precursor to inclusion in the training corpus.
4. Supervision (ASCR Rule 13)
In the context of AI, law firms must ensure that practitioners and staff are adequately trained and supervised in the use of AI tools, and that they understand the importance of maintaining professional standards when employing these technologies.
5. Data security
Providers may make various assurances in their privacy policies, but without strong data security protocols in place, these guarantees carry little weight. The QLS Guide to Cloud Service Providers offers additional information on pertinent factors to consider.
6. Transparency and explainability
A key challenge in the adoption of AI tools in legal practice is transparency. This will be less of a problem initially as the early AI tools will be little more than fancy word processors that can assemble and write text quickly.
However, once a system is taking client data and generating recommendations as to an appropriate course of action, the client will need to know which elements are based on human judgment and which are machine-generated.
Decision review: In cases where AI tools are used to make decisions or provide recommendations, law firms should have processes in place to review and validate the outcomes.
This will necessarily involve the input of human experts, who can assess the AI-generated results in the context of their professional judgment and experience. Depending on the type of work, an audit confirmation rather than an item-by-item check may be justifiable, but this needs to be very carefully assessed.
For example, there is little point in using an e-discovery tool if every document it categorises must still be reviewed manually. It would be difficult to apply the same approach to auto-generated wills, at least until the error rate could be clearly tracked through a large sample group.
7. Liability and risk management
Professional indemnity insurance: If your firm’s business model adapts from providing legal advice and documents to providing access to a curated system which the client uses to generate these, you should consider whether that service falls within the insuring clauses of your professional indemnity insurer’s policy.
Contractual clauses: When engaging third-party AI providers, law firms should attempt to negotiate appropriate risk-transfer clauses, such as indemnities, warranties and limitations of liability. This may be optimistic, as providers are likely to be very reluctant to accept third-party liability.
8. Record keeping
A record of what the AI was told, what it originated and what was done to check and complete that draft output is an essential part of the client file. This material should be extracted from the AI tool and kept as part of the coherent client file.
As AI becomes more prevalent in the legal industry, it is essential for practitioners to stay informed about new developments and best practices. Law firms should encourage and support their staff in undertaking relevant CPD activities, such as attending conferences, participating in workshops, or completing online courses related to AI in legal practice.
9. Additional resources
David Bowles is a Queensland Law Society ethics solicitor.
Footnotes
1 For an interesting overview of how lawyers can best use chatbots such as ChatGPT (as at early 2023), consider this paper from the University of Minnesota Law School: Daniel Schwarcz and Jonathan Choi, ‘AI Tools for Lawyers: A Practical Guide’ (2023) Social Science Research Network.
2 Queensland Law Society, Australian Solicitors’ Conduct Rules 2012 (at 1 June 2012) r4 (ASCR).
3 Arguably, the duty of technical competence includes staying sufficiently abreast of technological changes to advise a client when a more automated approach might be more efficient – in undertaking a ‘big document’ discovery exercise, for example.
4 Ibid r9.
5 To view the filed claim, see: ‘Thomson Reuters Enter. Ctr. GmbH v. ROSS Intelligence Inc.’, Casetext (web page, 29 March 2021).
6 Refer to ASCR (n 2) r9.
7 Ibid r11.
8 Ibid r10.